algorithmic monoculture
- North America > United States > California (0.04)
- North America > United States > Texas (0.04)
- Asia > Middle East > Jordan (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Research Report > New Finding (0.93)
- Research Report > Experimental Study (0.93)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Research Report > Experimental Study (0.93)
- Research Report > New Finding (0.68)
- Banking & Finance (0.67)
- Education > Educational Setting > Higher Education (0.46)
Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization?
As the scope of machine learning broadens, we observe a recurring theme of algorithmic monoculture: the same systems, or systems that share components (e.g., the same data or models), are deployed across many contexts. While sharing offers advantages like amortizing effort, it also has risks. We introduce and formalize one such risk, outcome homogenization: the extent to which particular individuals or groups experience the same outcomes across different deployments. If the same individuals or groups exclusively experience undesirable outcomes, this may institutionalize systemic exclusion and reinscribe social hierarchy. We relate algorithmic monoculture and outcome homogenization by proposing the component sharing hypothesis: if algorithmic systems are increasingly built on the same data or models, then they will increasingly homogenize outcomes.
When Neutral Summaries are not that Neutral: Quantifying Political Neutrality in LLM-Generated News Summaries
Supriti Vijay, Aman Priyanshu, Ashique R. KhudaBukhsh
In an era where societal narratives are increasingly shaped by algorithmic curation, investigating the political neutrality of LLMs is an important research question. This study presents a fresh perspective on quantifying the political neutrality of LLMs through the lens of abstractive text summarization of polarizing news articles. We consider five pressing issues in current US politics: abortion, gun control/rights, healthcare, immigration, and LGBTQ+ rights. Via a substantial corpus of 20,344 news articles, our study reveals a consistent trend towards pro-Democratic biases in several well-known LLMs, with gun control and healthcare exhibiting the most pronounced biases (max polarization differences of -9.49% and -6.14%, respectively). Further analysis uncovers a strong convergence in the vocabulary of the LLM outputs for these divisive topics (55% overlap for Democrat-leaning representations, 52% for Republican). With a consequential US election only months away, we consider our findings important.
- North America > United States > Washington > King County > Seattle (0.14)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > New York (0.04)
- Research Report > New Finding (0.68)
- Research Report > Experimental Study (0.48)
Kathleen Creel: Examining Ethical Questions in AI
It's safe to say that Plato and his contemporaries never grappled with moral questions raised by the development of neural networks or issues surrounding data privacy and security. But a few modern philosophers – like Kathleen Creel – are doing just that as they harness age-old ideas about knowledge, existence, and ethics to understand and respond to the challenges posed by today's technology. "I still get a lot from Plato and other historical philosophers, but the task of philosophy is to figure out what the questions of a particular age, of a particular society or culture are, and to ask how philosophy can help to address them," Creel says. "It gives us a clearer moral system to help sort through what our priorities ought to be, and how we should act in our lives." Creel is finishing a two-year Embedded EthiCS Postdoctoral Fellowship based at Stanford's McCoy Family Center for Ethics in Society and the Institute for Human-Centered Artificial Intelligence (HAI).